Scores of UK parliamentarians join call to regulate most powerful AI systems
The campaign is demanding stricter controls on frontier systems, citing fears superintelligent AI could 'compromise national and global security'. More than 100 UK parliamentarians are calling on the government to introduce binding regulations on the most powerful AI systems as concern grows that ministers are moving too slowly to create safeguards in the face of lobbying from the technology industry. A former AI minister and a former defence secretary are part of a cross-party group of Westminster MPs, peers and elected members of the Scottish, Welsh and Northern Irish legislatures demanding stricter controls on frontier systems, citing fears superintelligent AI "would compromise national and global security". The push for tougher regulation is being coordinated by a nonprofit organisation called Control AI, whose backers include the co-founder of Skype, Jaan Tallinn.
What are the odds? Risk and uncertainty about AI existential risk
This work is a commentary on the article "AI Survival Stories: a Taxonomic Analysis of AI Existential Risk" (https://doi.org/10.18716/ojs/phai/2025.2801) by Cappelen, Goldstein, and Hawthorne. It is not just a commentary, though, but a useful reminder of the philosophical limitations of "linear" models of risk. The article focuses on the model employed by the authors: first, I discuss some differences between standard Swiss Cheese models and this one. I then argue that, in a situation of epistemic indifference, P(D) is higher than one might first suggest, given the structural relationships between layers. I then distinguish between risk and uncertainty, and argue that any estimate of P(D) is structurally affected by two kinds of uncertainty: option uncertainty and state-space uncertainty. Incorporating these dimensions of uncertainty into our qualitative discussion of AI existential risk can provide a better understanding of P(D).
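The Swiss Cheese point in the abstract above can be made concrete with a toy calculation. This is a hedged sketch, not taken from the commentary itself: the function names, the choice of n = 3 layers, and p = 0.5 per layer (epistemic indifference) are all illustrative assumptions. It shows why P(D) comes out higher when the safety layers are structurally related rather than independent.

```python
# Toy Swiss Cheese model (illustrative assumptions, not the article's model):
# disaster D occurs only if every safety layer fails.

def p_disaster_independent(p: float, n: int) -> float:
    """If n layers fail independently, each with probability p, P(D) = p**n."""
    return p ** n

def p_disaster_correlated(p: float, n: int) -> float:
    """If the layers are perfectly correlated (they fail together), P(D) = p."""
    return p

# Under epistemic indifference (p = 0.5) with 3 layers:
print(p_disaster_independent(0.5, 3))  # 0.125
print(p_disaster_correlated(0.5, 3))   # 0.5
```

Any real dependence between layers lands between these two extremes, so assuming independence gives the most optimistic estimate of P(D).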
Silicon Valley Takes Artificial General Intelligence Seriously--Washington Must Too
Artificial General Intelligence--machines that can learn and perform any cognitive task that a human can--has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it's an impending reality that demands our immediate attention. On Sept. 17, during a Senate Judiciary Subcommittee hearing titled "Oversight of AI: Insiders' Perspectives," whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University's Center for Security and Emerging Technology, testified that, "The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence." She continued that leading AI companies such as OpenAI, Google, and Anthropic are "treating building AGI as an entirely serious goal."
Nobody Knows How to Safety-Test AI
Beth Barnes and three of her colleagues sit cross-legged in a semicircle on a damp lawn on the campus of the University of California, Berkeley. They are describing their attempts to interrogate artificial intelligence chatbots. "They are, in some sense, these vast alien intelligences," says Barnes, 26, who is the founder and CEO of Model Evaluation and Threat Research (METR), an AI-safety nonprofit. "They know so much about whether the next word is going to be 'is' versus 'was.' We're just playing with a tiny bit on the surface, and there's all this, miles and miles underneath," she says, gesturing at the potentially immense depths of large language models' capabilities. Researchers at METR look a lot like Berkeley students--the four on the lawn are in their twenties and dressed in jeans or sweatpants.
Fox News AI Newsletter: Artificial intelligence-designed drug
HIGH-TECH HEALTH: Inflammatory bowel disease impacts 1.6 million people in the U.S. -- and a new artificial intelligence-generated drug could help alleviate symptoms. AI SAFETY: The White House says "developers of the most powerful AI systems" will now have to report AI safety test results to the Department of Commerce in the wake of an executive order issued by President Biden aimed at "managing the risks" of the technology. HIGH-TECH HILL: A top House Republican lawmaker is eyeing the opportunities and risks of integrating artificial intelligence technology into the day-to-day operations of the U.S. Congress. 'MEMORY RESTORED': Restoring your memories of a vague childhood toy, movie, video game or book that's been on the tip of your tongue for years could be as simple as plugging a couple of sentences into a chatbot, some users say. WARTIME AI: Israel's Defense Ministry is taking advantage of its country's vibrant high-tech scene to create an artificial intelligence-driven information platform that will help keep track of the increasingly deteriorating humanitarian situation in the Gaza Strip, even as Israeli troops continue to battle the Iranian-backed Islamist terror group Hamas, Fox News Digital has learned.
White House: Developers of 'powerful AI systems' now have to report safety test results to government
The White House says "developers of the most powerful AI systems" will now have to report AI safety test results to the Department of Commerce in the wake of an executive order issued by President Biden aimed at "managing the risks" of the technology. The news comes as Deputy Chief of Staff Bruce Reed is convening the White House AI Council on Monday, consisting of "top officials from a wide range of federal departments and agencies" who have reported completing 90-day actions and advancing other directives tasked by the order Biden signed last October, according to the White House. Among those actions was that they "[u]sed Defense Production Act authorities to compel developers of the most powerful AI systems to report vital information, especially AI safety test results, to the Department of Commerce," the White House said. "These companies now must share this information on the most powerful AI systems, and they must likewise report large computing clusters able to train these systems," the White House added.
'Very scary': Mark Zuckerberg's pledge to build advanced AI alarms experts
Mark Zuckerberg has been accused of taking an irresponsible approach to artificial intelligence after committing to building a powerful AI system on a par with human levels of intelligence. The Facebook founder has also raised the prospect of making it freely available to the public. The Meta chief executive has said the company will attempt to build an artificial general intelligence (AGI) system and make it open source, meaning it will be accessible to developers outside the company. The system should be made "as widely available as we responsibly can", he added. In a Facebook post, Zuckerberg said it was clear that the next generation of tech services "requires building full general intelligence".
AI firms 'should include members of public on boards to protect society'
Companies developing powerful artificial intelligence systems must have independent board members representing the "interests of society", according to an expert regarded as one of the modern godfathers of the technology. Yoshua Bengio, a co-winner of the 2018 Turing Award – referred to as the "Nobel prize of computing" – said AI firms must have oversight from members of the public, as advances in the technology accelerate rapidly. Speaking in the wake of the boardroom upheaval at the ChatGPT developer OpenAI, including the exit and return of its chief executive, Sam Altman, Bengio said a "democratic process" was needed to monitor developments in the field. "How do we make sure that these advances are happening in a way that doesn't endanger the public? How do we make sure that they're not abused for increasing one's power?" the AI pioneer told the Guardian. "To me, the answer is obvious in principle.
Unpacking the hype around OpenAI's rumored new Q* model
While we still don't know all the details, there have been reports that researchers at OpenAI had made a "breakthrough" in AI that had alarmed staff members. Reuters and The Information both report that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math. According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company's quest to build artificial general intelligence, a much-hyped concept referring to an AI system that is smarter than humans. The company declined to comment on Q*. Social media is full of speculation and excessive hype, so I called some experts to find out how big a deal any breakthrough in math and AI would really be.